[Appendix] Graph Self-supervised Learning with Accurate Discrepancy Learning

Neural Information Processing Systems

Organization In Section A, we first introduce the baselines and our model, and then describe the experimental details of the graph classification and link prediction tasks as well as our in-depth analyses. Then, in Section B, we provide additional experimental results: analyses on the datasets, an ablation study of our proposed objectives, the effects of our hyperparameters (λ1, α, λ2, and the perturbation magnitude), an ablation study of attribute masking, and a comparison with augmentation-free approaches. In particular, the pre-training dataset consists of 306K unlabeled protein ego-networks of 50 species, and the fine-tuning dataset consists of 88K protein ego-networks of 8 species, with the label given by the functionality of the ego protein. For pre-training, the number of epochs is 100, the batch size is 128, the learning rate is 0.001, and the margin is 10. For fine-tuning, we also follow the conventional setting from Hu et al. [3]. For JOAO and GraphLoG, we use the public source codes to obtain the pre-trained models.
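The pre-training setup above can be summarized in a minimal sketch. The configuration values (100 epochs, batch size 128, learning rate 0.001, margin 10) come from the text; the function name and the exact hinge form of the margin objective are our illustrative assumptions, not the authors' code.

```python
# Hyperparameters stated in the appendix text.
PRETRAIN_CONFIG = {
    "epochs": 100,
    "batch_size": 128,
    "learning_rate": 1e-3,
    "margin": 10.0,
}

def margin_discrepancy_loss(d_pos, d_neg, margin=PRETRAIN_CONFIG["margin"]):
    """Hypothetical hinge-style margin objective: push the predicted
    discrepancy of a heavily perturbed (negative) view to be at least
    `margin` larger than that of a lightly perturbed (positive) view."""
    return max(0.0, margin + d_pos - d_neg)
```

With margin 10, a pair whose negative discrepancy already exceeds the positive one by more than the margin contributes zero loss.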






Adversarial Self-Supervised Contrastive Learning

Neural Information Processing Systems

We validate our method, Robust Contrastive Learning (RoCL), on multiple benchmark datasets, on which it obtains comparable robust accuracy over state-of-the-art supervised adversarial learning methods, and significantly improved robustness against black-box and unseen types of attacks.
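The core idea of adversarial self-supervised contrastive learning is to craft perturbations that increase a contrastive loss rather than a supervised one. The following is a minimal, illustrative single-step sketch in that spirit (the function name, step sizes, and interface are our assumptions, not RoCL's implementation); the caller supplies the gradient of its contrastive loss with respect to the input.

```python
import numpy as np

def fgsm_contrastive_step(x, grad_loss_wrt_x, eps=8 / 255, step=2 / 255):
    """One FGSM-style adversarial step for a contrastive objective:
    move the input in the sign direction of the loss gradient, with
    the perturbation clipped to an L-infinity ball of radius eps."""
    delta = step * np.sign(grad_loss_wrt_x)
    return x + np.clip(delta, -eps, eps)
```

In a full training loop, the adversarial view produced this way would replace or augment one of the two views in the contrastive pair, so the encoder learns representations that are stable under such attacks.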


A Bi-Level Framework for Learning to Solve Combinatorial Optimization on Graphs

Neural Information Processing Systems

Combinatorial Optimization (CO) has been a long-standing challenging research topic featured by its NP-hard nature. Traditionally such problems are approximately solved with heuristic algorithms which are usually fast but may sacrifice the solution quality. Currently, machine learning for combinatorial optimization (MLCO) has become a trending research topic, but most existing MLCO methods treat CO as a single-level optimization by directly learning the end-to-end solutions, which are hard to scale up and mostly limited by the capacity of ML models given the high complexity of CO. In this paper, we propose a hybrid approach to combine the best of the two worlds, in which a bi-level framework is developed with an upper-level learning method to optimize the graph (e.g.


Credit Assignment in Neural Networks through Deep Feedback Control

Neural Information Processing Systems

The success of deep learning sparked interest in whether the brain learns by using similar techniques for assigning credit to each synaptic weight for its contribution to the network output. However, the majority of current attempts at biologically-plausible learning methods are either non-local in time, require highly specific connectivity motifs, or have no clear link to any known mathematical optimization method. Here, we introduce Deep Feedback Control (DFC), a new learning method that uses a feedback controller to drive a deep neural network to match a desired output target and whose control signal can be used for credit assignment. The resulting learning rule is fully local in space and time and approximates Gauss-Newton optimization for a wide range of feedback connectivity patterns. To further underline its biological plausibility, we relate DFC to a multi-compartment model of cortical pyramidal neurons with a local voltage-dependent synaptic plasticity rule, consistent with recent theories of dendritic processing. By combining dynamical system theory with mathematical optimization theory, we provide a strong theoretical foundation for DFC that we corroborate with detailed results on toy experiments and standard computer-vision benchmarks.
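The controller-driven learning rule described above can be illustrated on a toy linear model. This sketch is our own simplification, not the authors' DFC implementation: a proportional-integral-style controller nudges the output toward the target, and the settled control signal is reused as the local error for the weight update.

```python
import numpy as np

def dfc_toy_step(W, x, target, lr=0.1, k=0.5, steps=50):
    """Toy single-layer 'network' y = W @ x driven by a feedback
    controller. The control signal u is integrated until the output
    matches the target, then serves as the (local) credit signal for
    the weight update, a crude stand-in for DFC's learning rule."""
    u = np.zeros_like(target)
    for _ in range(steps):
        y = W @ x + u             # control signal injected at the output
        u = u + k * (target - y)  # integrate the remaining error
    W = W + lr * np.outer(u, x)   # local, control-driven weight update
    return W
```

Repeated application drives the output toward the target: at the controller's fixed point u = target - W @ x, so the update is a gradient-like step on the output error, loosely mirroring how DFC's control signal approximates an optimization step.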